    Algorithmic Impact Assessments Under the GDPR: Producing Multi-Layered Explanations

    Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance in pursuit of algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, by contrast, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance,” that is, the use of public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation. The GDPR contains an array of systemic accountability tools. Of these tools, impact assessments (Art. 35) have recently received particular attention on both sides of the Atlantic as a means of implementing algorithmic accountability at early stages of design, development, and training. The aim of this paper is to address how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. We address the relationship between DPIAs and individual transparency rights. We propose, too, that impact assessments link the GDPR’s two methods of governing algorithmic decision-making by both providing systemic governance and serving as an important “suitable safeguard” (Art. 22) of individual rights. After noting the potential shortcomings of DPIAs, this paper closes with a call (and some suggestions) for a Model Algorithmic Impact Assessment in the context of the GDPR. Our examination of DPIAs suggests that the current focus on the right to explanation is too narrow. We call, instead, for data controllers to consciously use the required DPIA process to produce what we call “multi-layered explanations” of algorithmic systems. This concept of multi-layered explanations not only more accurately describes what the GDPR is attempting to do, but also normatively better fills potential gaps between the GDPR’s two approaches to algorithmic accountability.

    Vulnerable data subjects

    Discussion about vulnerable individuals and communities has spread from research ethics to consumer law and human rights. According to many theoreticians and practitioners, the framework of vulnerability offers an alternative language for articulating problems of inequality, power imbalances, and social injustice. Building on this conceptualisation, we try to understand the role and potential of the notion of vulnerable data subjects. The starting point for this reflection is the wide-ranging development, deployment, and use of data-driven technologies that may pose substantial risks to human rights, the rule of law, and social justice. Implementation of such technologies can lead to discrimination, the systematic marginalisation of different communities, and the exploitation of people in particularly sensitive life situations. Considering those problems, we recognise the special role of personal data protection and call for its vulnerability-aware interpretation. This article makes three contributions. First, we examine how the notion of vulnerability is conceptualised and used in philosophy, human rights, and European law. We then confront those findings with the presence and interpretation of vulnerability in data protection law and discourse. Second, we identify two problematic dichotomies that emerge from the theoretical and practical application of this concept in data protection. Those dichotomies reflect the tensions within the definition and manifestation of vulnerability. To overcome the limitations arising from those two dichotomies, we support the idea of layered vulnerability, which seems compatible with the GDPR and the risk-based approach. Finally, we outline how the notion of vulnerability can influence the interpretation of particular provisions in the GDPR. In this process, we focus on issues of consent, the Data Protection Impact Assessment, the role of Data Protection Authorities, and the participation of data subjects in decision-making about data processing.

    Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions

    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black-box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. Our goal is to put forward a comprehensive proposal for advancing XAI. To achieve this goal, we present a manifesto of 27 open problems organized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders.

    Mental data protection and the GDPR

    Although decoding the content of mental states is currently unachievable, technologies such as neural interfaces, affective computing systems, and digital behavioral technologies enable increasingly reliable statistical associations between certain data patterns and mental activities such as memories, intentions, and emotions. Furthermore, Artificial Intelligence enables the exploration of these activities not just retrospectively but also in real time and predictively. In this article, we introduce the notion of 'mental data', defined as any data that can be organized and processed to make inferences about the mental states of a person, including their cognitive, affective, and conative states. Further, we analyze existing legal protections for mental data by considering the lawfulness of their processing in light of different legal bases and purposes, with a special focus on the EU General Data Protection Regulation (GDPR). We argue that the GDPR is an adequate tool to mitigate risks related to mental data processing. However, we recommend that interpreters focus on processing characteristics rather than merely on the category of data at issue. Finally, we call for a 'Mental Data Protection Impact Assessment', a specific data protection impact assessment designed to better assess and mitigate the risks to fundamental rights and freedoms associated with the processing of mental data.

    "Intellectual Privacy": Trade Secrets and the Propertization of Consumers' Personal Data in the EU

    This paper attempts to find a much-needed balance between data protection rights and trade secret rights over customer information in the European Union framework. Our analysis proposes a “shared management” of secret data between businesses and customers, based on a de-contextualization of customer databases. Several rights are in conflict across these two legal domains. For instance, the right of access to personal data and the newly proposed right to “data portability” conflict with the interests of trade secret holders. What is even more problematic is that both legal frameworks are increasingly based on a “proprietary” approach to data: each confers a form of abstract “monopoly”. As a first step, we analyze, in comparison with US law, when and how the scopes of data protection and trade secret protection coincide in practice under the proposed EU reforms in the field. As the literature illustrates, the balancing rules in these two frameworks are vague and contradictory. Nevertheless, a literal interpretation of the analyzed frameworks suggests a preference for data protection rights. In analyzing this apparent favoring of personality rights over other (e.g., economic) rights, we investigate trade secrets, both in the USA and in selected European states, from the perspective of personality rights and data protection rights. As a result of our study, we propose a change in perspective: from the contrast between customers and businesses to the conflict between customers and businessmen, which enables us to verify whether and when the personality rights of data subjects affect the above-mentioned personality rights of businessmen in practice. The paper proposes to “de-contextualize” secret data so that customers can access only data strictly related to their biographical information, while trade secret holders remain free not to disclose the outputs of their data processing (behavioral evaluations, forecasts, life-expectancy studies, personalized marketing plans, pricing, etc.) if disclosure would adversely affect their interests. In this framework, the “proprietary” approach of European laws should be seen as an opportunity rather than an obstacle: secret data can be considered a “shared good” of customers and businessmen. A multi-level management of data should be based on interests that are common to customers and trade secret holders (secrecy and data updating).